
    Fine-grained traffic state estimation and visualisation

    Tools for visualising the current traffic state are used by local authorities for strategic monitoring of the traffic network and by everyday users for planning their journeys. Popular visualisations include those provided by Google Maps and by Inrix. Both employ a traffic-light colour-coding system, in which roads on a map are coloured green if traffic is flowing normally and red or black if there is congestion. New sensor technology, especially from wireless sources, is allowing resolution down to lane level. A case study is reported in which a traffic micro-simulation test bed is used to generate high-resolution estimates. An interactive visualisation of the fine-grained traffic state is presented. The visualisation is demonstrated using Google Earth and affords the user a detailed three-dimensional view of the traffic state, down to lane level, in real time.
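    As a minimal sketch of the kind of pipeline such a visualisation implies, the snippet below maps lane-level speed estimates to a traffic-light colour scheme and emits a KML placemark that Google Earth can render. The thresholds, input format, and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: map lane-level speed estimates to a traffic-light
# colour scheme and emit a minimal KML fragment for Google Earth.
# Thresholds and the input format are assumptions, not the paper's design.

def traffic_colour(speed_kmh, free_flow_kmh):
    """Return a KML colour (aabbggrr) for a lane's estimated speed."""
    ratio = speed_kmh / free_flow_kmh
    if ratio > 0.75:
        return "ff00ff00"   # green: flowing normally
    if ratio > 0.25:
        return "ff00a5ff"   # amber: slow
    return "ff0000ff"       # red: congested

def lane_placemark(lane_id, coords, speed_kmh, free_flow_kmh=100.0):
    """Render one lane polyline as a coloured KML Placemark."""
    colour = traffic_colour(speed_kmh, free_flow_kmh)
    coord_str = " ".join(f"{lon},{lat},0" for lon, lat in coords)
    return (
        f"<Placemark><name>{lane_id}</name>"
        f"<Style><LineStyle><color>{colour}</color>"
        f"<width>4</width></LineStyle></Style>"
        f"<LineString><coordinates>{coord_str}</coordinates></LineString>"
        f"</Placemark>"
    )

# Example: one congested lane segment (hypothetical coordinates).
print(lane_placemark("A33:lane1", [(-1.40, 50.92), (-1.39, 50.93)], 12.0))
```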

    Modelling the dispersion of aircraft trajectories using Gaussian processes

    This work investigates the application of Gaussian processes to capturing the probability distribution of a set of aircraft trajectories from historical measurement data. To achieve this, all data are assumed to be generated from a probabilistic model that takes the shape of a Gaussian process. The approach to Gaussian process modelling used here is based on a linear expansion of trajectory data into a set of basis functions whose coefficients are modelled by a multivariate Gaussian distribution. The parameters are learned through maximum likelihood estimation. The resulting probabilistic model can be used both for modelling the dispersion of trajectories along a common flightpath and for generating new samples that are similar to the historical data. The performance of this approach is evaluated using three trajectory datasets: toy trajectories generated from a Gaussian distribution, sounding rocket trajectories generated by a stochastic rocket flight simulator, and aircraft trajectories on a given departure path from DFW airport, as measured by ground-based radar. The maximum deviations between the probabilistic model and the test data for the three datasets are 4.9%, 7.6%, and 13.1%, respectively.
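    The weight-space view of a Gaussian process described here admits a compact sketch: project each trajectory onto a fixed basis, fit a multivariate Gaussian over the weights by maximum likelihood, then sample new trajectories. The basis choice (Gaussian RBFs), dimensions, and toy data below are illustrative assumptions, not the paper's configuration.

```python
# Basis-function view of a GP over trajectories: least-squares weights
# per trajectory, a Gaussian fit over the weights, then new samples.
import numpy as np

def rbf_basis(t, centres, width):
    """Design matrix of Gaussian radial basis functions evaluated at t."""
    return np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)                   # common time grid
Phi = rbf_basis(t, np.linspace(0, 1, 12), 0.1)   # (100, 12) design matrix

# Toy data: noisy sinusoidal "trajectories" standing in for radar tracks.
Y = np.sin(2 * np.pi * t)[None, :] + 0.1 * rng.standard_normal((50, 100))

# Least-squares weights per trajectory, then MLE mean and covariance.
W = np.linalg.lstsq(Phi, Y.T, rcond=None)[0].T   # (50, 12) weight vectors
mu = W.mean(axis=0)
Sigma = np.cov(W, rowvar=False)

# New samples from the learned model, similar to the historical data.
w_new = rng.multivariate_normal(mu, Sigma, size=5)
samples = w_new @ Phi.T                          # (5, 100) trajectories
```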

    Design and testing of a nanoparticle spectrometer

    This thesis is concerned with a project to design and test a new Nanoparticle Spectrometer (NPS). The NPS is an instrument designed to make fast measurements of the size distribution and number concentration of aerosol samples containing particles in the size range 5–300 nm. The intended application of the NPS is to take time-dependent measurements of the aerosols emitted from internal combustion engines. The primary motivation for this work is the potentially detrimental effects of combustion-generated aerosols on human health and the environment. In common with previous aerosol spectrometers, the Nanoparticle Spectrometer consists of a charger, which gives particles an electrostatic charge; a classifier, which separates the particles in an aerosol sample according to their electrical mobility (a function of size); and an array of counting devices that count the numbers of particles with different mobilities. The novelty of the NPS is the geometry of the instrument, which, it will be argued, has certain advantages. The behaviour of particles in the classifier has been modelled numerically, and this model has been used to optimise the classifier geometry. Two charger designs were considered, and two analytical charger models were developed and compared. The classifier model was combined with the selected charger model to create a simulation of the instrument's operation, which predicts the NPS's output signal for a given aerosol sample size distribution and number concentration. A prototype NPS was designed, built, and tested experimentally. The objective of the experiments was to test the validity of the instrument model and to compare the performance of the NPS with an established slow-response particulate measuring instrument, the SMPS. The experiments showed good agreement between modelled and measured results, as well as close correlation between the NPS and SMPS results across most of the instruments' range. The experiments also revealed some areas in which the performance of the NPS could be improved, for instance the modelling of diffusion in the classifier and of the fluid flow in the particle charger.
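    The mobility classification underlying this class of instrument rests on the standard electrical-mobility relation; the sketch below evaluates it with textbook constants for air at ambient conditions (these values are not taken from the thesis).

```python
# Standard electrical-mobility relation, Z = n*e*Cc(d) / (3*pi*mu*d),
# with the Cunningham slip correction. Constants are textbook values.
import math

E = 1.602e-19      # elementary charge, C
MU = 1.81e-5       # dynamic viscosity of air, Pa*s
MFP = 68e-9        # mean free path of air molecules, m

def slip_correction(d):
    """Cunningham slip correction (Davies coefficients) for diameter d [m]."""
    kn = 2 * MFP / d
    return 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def electrical_mobility(d, n_charges=1):
    """Electrical mobility Z [m^2/(V*s)] of a particle of diameter d [m]."""
    return n_charges * E * slip_correction(d) / (3 * math.pi * MU * d)

# Mobility falls steeply with size, which is what lets a classifier
# separate 5-300 nm particles in an electric field.
for d_nm in (5, 50, 300):
    print(d_nm, "nm ->", electrical_mobility(d_nm * 1e-9))
```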

    Supervised learning from human performance at the computationally hard problem of optimal traffic signal control on a network of junctions

    Optimal switching of traffic lights on a network of junctions is a computationally intractable problem. In this research, road traffic networks containing signalized junctions are simulated. A computer game interface is used to enable a human ‘player’ to control the traffic light settings at the junctions within the simulation. A supervised learning approach based on simple neural network classifiers is used to capture the human players’ strategies in the game and thus develop a human-trained machine control (HuTMaC) system that approaches human levels of performance. Experiments conducted within the simulation compare the performance of HuTMaC with two well-established traffic-responsive control systems that are widely deployed in the developed world, and with a temporal-difference-learning-based control method. In all experiments, HuTMaC outperforms the other control methods in terms of average delay and the variance of delay. The conclusion is that these results add weight to the suggestion that HuTMaC may be a viable alternative, or supplemental method, to approximate optimization for some practical engineering control problems where the optimal strategy is computationally intractable.
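    The supervised-learning step described here can be sketched compactly: logged (traffic state, human signal choice) pairs are fit with a small neural-network classifier that then proposes stage changes. The feature layout, network size, and toy labelling policy below are illustrative assumptions, not the paper's configuration.

```python
# Imitation of a human player from game logs with a small MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Stand-in game log: each row is a traffic-state snapshot (e.g. queue
# lengths per approach); each label is the stage the human selected.
X = rng.random((2000, 8))                   # 8 state features per snapshot
y = (X[:, :4].sum(axis=1) > X[:, 4:].sum(axis=1)).astype(int)  # toy policy

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

# At control time, the trained classifier imitates the human player.
state = rng.random((1, 8))
print("recommended stage:", clf.predict(state)[0])
```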

    Corporations' use and misuse of evidence to influence health policy: A case study of sugar-sweetened beverage taxation

    Background: Sugar-sweetened beverages (SSBs) are a major source of sugar in the diet. Although trends in consumption vary across regions, in many countries, particularly low- and middle-income countries (LMICs), consumption continues to increase. In response, a growing number of governments have introduced taxes on SSBs. SSB manufacturers have opposed such taxes, disputing the role that SSBs play in diet-related diseases and the effectiveness of SSB taxation, and alleging major economic impacts. Given the importance of evidence to the effective regulation of products harmful to human health, we scrutinised industry submissions to the South African government's consultation on a proposed SSB tax and examined their use of evidence. Results: Corporate submissions were underpinned by several strategies involving the misrepresentation of evidence. First, references were used in a misleading way, providing false support for key claims. Second, raw data, which represented a pliable alternative evidence base to peer-reviewed studies, were misused to dispute both the premise of targeting sugar for special attention and the impact of SSB taxes on SSB consumption. Third, purposively selected evidence was used in conjunction with other techniques, such as selective quoting from studies and the omission of important qualifying information, to promote an alternative evidential narrative to that supported by the weight of peer-reviewed research. Fourth, a range of mutually reinforcing techniques that inflated the effects of SSB taxation on jobs, public revenue generation, and gross domestic product was used to exaggerate the economic impact of the tax. This "hyperbolic accounting" included rounding up figures in original sources, double counting, and skipping steps in economic modelling. Conclusions: Our research raises fundamental questions concerning the bona fides of industry information in the context of government efforts to combat diet-related diseases. The beverage industry's claims against SSB taxation rest on a complex interplay of techniques that appear to be grounded in evidence but do not observe widely accepted approaches to the use of either scientific or economic evidence. These techniques are similar, but not identical, to those used by tobacco companies, and they highlight the problems of introducing evidence-based policies aimed at managing the market environment for unhealthful commodities.

    An automated signalized junction controller that learns strategies from a human expert

    An automated signalized junction control system that can learn strategies from a human expert has been developed. The system applies machine learning techniques based on logistic regression and neural networks to effect a classification of the state space, using evidence data generated when a human expert controls a simulated junction. The state space is constructed from a series of bids from agents that monitor regions of the road network. This builds on earlier work, which developed the High Bid auctioning-agent system to control signalized junctions using localization probe data. For reference, the performance of the machine learning signal control strategies is compared with that of High Bid and of the MOVA system, which uses inductive loop detectors. Performance is evaluated using simulation experiments on two networks: an isolated T-junction and a two-junction network modelled on the High Road area of Southampton, UK. The experimental results indicate that machine learning junction control strategies trained by a human expert can outperform both High Bid and MOVA in terms of minimizing average delay and maximizing equitability, where the variance of the distribution over journey times is taken as a quantitative measure of equitability. Further experimental tests indicate that the machine learning control strategies are robust to variation in the positioning accuracy of localization probes and to the fraction of vehicles equipped with probes.
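    One of the two classifiers named above, logistic regression over a bid vector, admits a very short sketch. The bid features and the toy expert-labelling rule below are stand-ins, not the paper's data.

```python
# Logistic regression mapping agents' bids to a signal action, trained
# on decisions recorded from a (here simulated) human expert.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each row: bids from agents on the competing approaches of a T-junction.
bids = rng.random((1000, 6))
# Toy expert labels: serve whichever side currently bids more in total.
expert_action = (bids[:, :3].sum(axis=1) > bids[:, 3:].sum(axis=1)).astype(int)

model = LogisticRegression().fit(bids, expert_action)
print("agreement with expert:", model.score(bids, expert_action))
```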

    An automated pattern recognition system for the quantification of inflammatory cells in hepatitis-C-infected liver biopsies

    This paper presents an automated system for the quantification of inflammatory cells in hepatitis-C-infected liver biopsies. Initially, features are extracted from colour-corrected biopsy images at positions of interest identified by adaptive thresholding and clump decomposition. A sequential floating search method and principal component analysis are used to reduce dimensionality. Manually annotated training images allow supervised training. The performance of Gaussian parametric and Gaussian mixture models is compared when they are used to classify regions as either inflammatory or healthy. The system is optimized using a response-surface method that maximizes the area under the receiver operating characteristic (ROC) curve. The system is then tested on images previously ranked by a number of observers with varying levels of expertise, and these rankings are compared with those of the automated system using Spearman rank correlation. Results show that the system can rank 15 test images, with varying degrees of inflammation, in strong agreement with five expert pathologists.
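    The classification stage described above (PCA reduction, a Gaussian model per class, AUC as the figure of merit) can be sketched as below. The feature dimensions and synthetic data are stand-ins, not the paper's pipeline; a quadratic discriminant is used here as the Gaussian parametric classifier.

```python
# PCA + per-class Gaussian classifier, scored by area under the ROC curve.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Stand-in features for candidate cell regions (healthy vs inflammatory).
X = np.vstack([rng.normal(0.0, 1.0, (300, 20)),
               rng.normal(0.8, 1.2, (300, 20))])
y = np.repeat([0, 1], 300)

X_red = PCA(n_components=5).fit_transform(X)     # dimensionality reduction

# One fitted Gaussian per class (a parametric Gaussian classifier).
gauss = QuadraticDiscriminantAnalysis().fit(X_red, y)
scores = gauss.predict_proba(X_red)[:, 1]

print("AUC:", roc_auc_score(y, scores))          # quantity being maximised
```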

    Comparison of Satellite-Derived and In-Situ Observations of Ice and Snow Surface Temperatures over Greenland

    The most practical way to obtain spatially broad and continuous measurements of surface temperature in the data-sparse cryosphere is by satellite remote sensing. The uncertainties in satellite-derived land-surface temperatures (LSTs) must be understood in order to develop the internally consistent, decade-scale LST records needed for climate studies. In this work we assess satellite-derived "clear-sky" LST products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and LSTs derived from the Enhanced Thematic Mapper Plus (ETM+), over snow and ice on Greenland. Where possible, we compare satellite-derived LSTs with in-situ air-temperature observations from Greenland Climate Network (GC-Net) automatic weather stations (AWS). We find that MODIS, ASTER, and ETM+ provide reliable and consistent LSTs under clear-sky conditions and over relatively flat snow and ice targets across a temperature range of -40 to 0 °C. The satellite-derived LSTs agree to within a relative RMS uncertainty of approximately 0.5 °C. The good agreement among the LSTs derived from the various satellite instruments is especially notable because different spectral channels and different retrieval algorithms are used to calculate LST from the raw satellite data. The AWS record in-situ data at a "point", while the satellite instruments record data over areas ranging from 57 × 57 m (ETM+) and 90 × 90 m (ASTER) to 1 × 1 km (MODIS). Surface topography and other factors contribute to variability of LST within a pixel, so the AWS measurements may not be representative of the LST of the pixel. Without more information on the local spatial patterns of LST, the AWS LST cannot be considered valid ground truth for the satellite measurements; the corresponding RMS uncertainty is approximately 2 °C. Despite the relatively large AWS-derived uncertainty, we find that the LST data are characterized by high accuracy but uncertain absolute precision.
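    The matchup statistics quoted above (bias-like agreement, RMS uncertainty) reduce to a short calculation once satellite retrievals are paired with coincident station observations; the sketch below uses placeholder arrays, not GC-Net data.

```python
# Matchup comparison: pair each clear-sky satellite LST retrieval with a
# coincident AWS observation and report mean bias and RMS difference.
import numpy as np

rng = np.random.default_rng(4)

aws_t = rng.uniform(-40.0, 0.0, 200)          # in-situ temperatures, deg C
sat_lst = aws_t + rng.normal(0.5, 2.0, 200)   # matched retrievals, deg C

bias = np.mean(sat_lst - aws_t)
rms = np.sqrt(np.mean((sat_lst - aws_t) ** 2))
print(f"bias = {bias:.2f} C, RMS difference = {rms:.2f} C")
```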

    Detection of the secondary eclipse of WASP-10b in the Ks-band

    WASP-10b, a non-inflated hot Jupiter, was discovered around a K dwarf in a near-circular orbit ($e \sim 0.06$). Since its discovery in 2009, differing published parameters for this system have led to a discussion about the size, density, and eccentricity of this exoplanet. In order to test the hypothesis of a circular orbit for WASP-10b, we observed its secondary eclipse in the Ks-band, where the contribution of planetary light is high enough to be detected from the ground. Observations were performed with the OMEGA2000 instrument at the 3.5-m telescope at Calar Alto (Almería, Spain) in staring mode during 5.4 continuous hours, with the telescope defocused, monitoring the target during the expected secondary eclipse. A relative light curve was generated and corrected for systematic effects using the Principal Component Analysis (PCA) technique. The final light curve was fitted using a transit model to find the eclipse depth and a possible phase shift. The best model obtained from the Markov Chain Monte Carlo analysis yields an eclipse depth of $\Delta F = 0.137\%^{+0.013\%}_{-0.019\%}$ and a phase offset of $\Delta\phi = -0.0028^{+0.0005}_{-0.0004}$. The eclipse phase offset derived from our modelling has systematic errors that were not taken into account and should not be considered evidence of an eccentric orbit. The offset in phase obtained leads to a value of $|e\cos\omega| = 0.0044$. The derived eccentricity is too small to be of any significance.
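    The quoted $|e\cos\omega|$ follows from the standard first-order relation between a secondary-eclipse phase offset and the orbital eccentricity; a worked check of the numbers above:

```latex
% First-order relation between the secondary-eclipse phase offset and
% e*cos(omega), applied to the fitted offset quoted in the abstract:
\[
  |e\cos\omega| \approx \frac{\pi}{2}\,|\Delta\phi|
                = \frac{\pi}{2} \times 0.0028
                \approx 0.0044 .
\]
```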